
    Rounding Algorithms for a Geometric Embedding of Minimum Multiway Cut

    The multiway-cut problem is, given a weighted graph and k >= 2 terminal nodes, to find a minimum-weight set of edges whose removal separates all the terminals. The problem is NP-hard, and even NP-hard to approximate within 1+delta for some small delta > 0. Calinescu, Karloff, and Rabani (1998) gave an algorithm with performance guarantee 3/2 - 1/k, based on a geometric relaxation of the problem. In this paper, we give improved randomized rounding schemes for their relaxation, yielding a 12/11-approximation algorithm for k=3 and a 1.3438-approximation algorithm in general. Our approach hinges on the observation that the problem of designing a randomized rounding scheme for a geometric relaxation is itself a linear programming problem. The paper explores computational solutions to this problem, and gives a proof that for a general class of geometric relaxations, there are always randomized rounding schemes that match the integrality gap.
    Comment: Conference version in ACM Symposium on Theory of Computing (1999). To appear in Mathematics of Operations Research.
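    In the geometric relaxation referenced above, each vertex is embedded as a point on the (k-1)-simplex, with terminal i at the unit vector e_i. A minimal sketch of one threshold-rounding pass in the style of Calinescu-Karloff-Rabani (illustrative only; the paper's improved schemes differ in how the threshold and terminal order are drawn):

```python
import random

def ckr_round(embedding, k, rho=None, order=None):
    """One threshold-rounding pass over a simplex embedding.

    embedding: dict vertex -> list of k simplex coordinates (summing to 1,
    terminal i sitting at unit vector e_i).  Returns vertex -> terminal index.
    All names here are illustrative, not from the paper.
    """
    if rho is None:
        rho = random.uniform(0, 1)          # random threshold
    if order is None:
        order = random.sample(range(k), k)  # random terminal order
    label = {}
    # Assign to each terminal (except the last) every still-unassigned
    # vertex whose coordinate for that terminal clears the threshold.
    for t in order[:-1]:
        for v, x in embedding.items():
            if v not in label and x[t] >= rho:
                label[v] = t
    # Leftover vertices go to the last terminal in the order.
    last = order[-1]
    for v in embedding:
        label.setdefault(v, last)
    return label
```

    Since a terminal's own coordinate is 1, every terminal always receives its own label, so the cut returned separates all terminals; the expected weight of edges cut is what the choice of rounding distribution controls.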

    Huckleberry Finn


    Make Research Data Public? -- Not Always so Simple: A Dialogue for Statisticians and Science Editors

    Putting data into the public domain is not the same thing as making those data accessible for intelligent analysis. A distinguished group of editors and experts who were already engaged in one way or another with the issues inherent in making research data public came together with statisticians to initiate a dialogue about the policies and practicalities of requiring published research to be accompanied by publication of the research data. The dialogue moved beyond the broad issues of advisability, intellectual integrity, and scientific exigency to the relevance of these issues to statistics as a discipline, and to the relevance of statistics, from inference to modeling to data exploration, to science and social-science policies on these issues.
    Comment: Published at http://dx.doi.org/10.1214/10-STS320 in Statistical Science (http://www.imstat.org/sts/) by the Institute of Mathematical Statistics (http://www.imstat.org).

    How should institutions help clinicians to practise greener anaesthesia: first-order and second-order responsibilities to practice sustainably

    There is a need for all industries, including healthcare, to reduce their greenhouse gas emissions. In anaesthetic practice, this requires not only a reduction in resource use and waste, but also a shift away from inhaled anaesthetic gases and towards alternatives with a lower carbon footprint. As inhalational anaesthesia produces greenhouse gas emissions at the point of use, achieving sustainable anaesthetic practice involves individual practitioner behaviour change. However, changing the practice of healthcare professionals raises potential ethical issues. The purpose of this paper is twofold. First, we discuss what moral duties anaesthetic practitioners have when it comes to practices that impact the environment. We argue that behaviour change among practitioners to align with certain moral responsibilities must be supplemented with an account of institutional duties to support it. In other words, we argue that institutions and those in power have second-order responsibilities to ensure that practitioners can fulfil their first-order responsibilities to practise more sustainably. The second goal of the paper is to consider not just the nature of second-order responsibilities but also their content. We assess four different ways that second-order responsibilities might be fulfilled within healthcare systems: removing certain anaesthetic agents, seeking consensus, education, and methods from behavioural economics. We argue that, while each of these is a necessary part of the picture, some interventions, such as nudges, have considerable advantages.

    HyperNetX: A Python package for modeling complex network data as hypergraphs

    HyperNetX (HNX) is an open source Python library for the analysis and visualization of complex network data modeled as hypergraphs. Initially released in 2019, HNX facilitates exploratory data analysis of complex networks using algebraic topology, combinatorics, and generalized hypergraph and graph-theoretical methods on structured data inputs. With its 2023 release, the library supports attaching metadata, numerical and categorical, to nodes (vertices) and hyperedges, as well as to node-hyperedge pairings (incidences). HNX has a customizable Matplotlib-based visualization module as well as HyperNetX-Widget, its JavaScript addon for interactive exploration and visualization of hypergraphs within Jupyter Notebooks. Both packages are available on GitHub and PyPI. With a growing community of users and collaborators, HNX has become a preeminent tool for hypergraph analysis.
    Comment: 3 pages, 2 figures.
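    The data model described above, hyperedges containing nodes, with node-hyperedge incidence pairs as first-class objects, can be sketched in a few lines of plain Python (this illustrates the structure only; it is not the HNX API, whose hypergraphs are typically built with hnx.Hypergraph from an edge dictionary):

```python
def incidences(hyperedges):
    """Return the set of (node, edge) incidence pairs for a hypergraph
    given as a dict mapping edge name -> iterable of nodes.  These
    pairings are the objects to which HNX's 2023 release lets users
    attach metadata."""
    return {(v, e) for e, nodes in hyperedges.items() for v in nodes}

def node_degrees(hyperedges):
    """Number of hyperedges containing each node."""
    deg = {}
    for nodes in hyperedges.values():
        for v in nodes:
            deg[v] = deg.get(v, 0) + 1
    return deg
```

    Unlike an ordinary graph, a hyperedge may contain any number of nodes, which is why the incidence set, rather than a pairwise edge list, is the natural place to hang per-pairing metadata.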

    TPU v4: An Optically Reconfigurable Supercomputer for Machine Learning with Hardware Support for Embeddings

    In response to innovations in machine learning (ML) models, production workloads changed radically and rapidly. TPU v4 is the fifth Google domain specific architecture (DSA) and its third supercomputer for such ML models. Optical circuit switches (OCSes) dynamically reconfigure its interconnect topology to improve scale, availability, utilization, modularity, deployment, security, power, and performance; users can pick a twisted 3D torus topology if desired. Much cheaper, lower power, and faster than Infiniband, OCSes and underlying optical components are <5% of system cost and <3% of system power. Each TPU v4 includes SparseCores, dataflow processors that accelerate models that rely on embeddings by 5x-7x yet use only 5% of die area and power. Deployed since 2020, TPU v4 outperforms TPU v3 by 2.1x and improves performance/Watt by 2.7x. The TPU v4 supercomputer is 4x larger at 4096 chips and thus ~10x faster overall, which along with OCS flexibility helps large language models. For similar sized systems, it is ~4.3x-4.5x faster than the Graphcore IPU Bow and is 1.2x-1.7x faster and uses 1.3x-1.9x less power than the Nvidia A100. TPU v4s inside the energy-optimized warehouse scale computers of Google Cloud use ~3x less energy and produce ~20x less CO2e than contemporary DSAs in a typical on-premise data center.
    Comment: 15 pages; 16 figures; to be published at ISCA 2023 (the International Symposium on Computer Architecture).
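    The embedding workload that SparseCores accelerate is essentially a sparse gather-and-reduce: for each example, fetch a handful of rows from a large table and pool them. A minimal pure-Python sketch of that access pattern (illustrative only; it says nothing about how SparseCores themselves are implemented):

```python
def embedding_bag(table, ids_per_example):
    """Sum-pooled embedding lookup.

    table: list of embedding rows (each a list of floats).
    ids_per_example: for each example, the row indices to gather.
    The memory-bound gather/reduce below, tiny compute per byte moved,
    is why dedicated dataflow hardware pays off for embedding models.
    """
    dim = len(table[0])
    out = []
    for ids in ids_per_example:
        pooled = [0.0] * dim
        for i in ids:                      # sparse gather
            row = table[i]
            pooled = [p + r for p, r in zip(pooled, row)]  # reduce
        out.append(pooled)
    return out
```

    In production recommendation models these tables can run to billions of rows, so the lookups are scattered across the whole machine's memory, which is why the abstract singles out embeddings as needing hardware support.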